[Arm64EC] Improve alignment mangling in arm64ec thunks. #90115
Conversation
In some cases, MSVC's mangling for arm64ec thunks includes the alignment of a struct. I added some code to try to match... but it never really worked right. The issues:
- Alignment is only mangled if it's 16 or more (I guess the default is supposed to be 8).
- Alignment isn't mangled on return values (since the memory is allocated by the caller).

The current patch leaves hooks to make alignment mangling work... but doesn't actually ever mangle alignment: clang never actually encodes a relevant alignment into the IR. Once we get clang to emit the real size/alignment of structs, we can start emitting it.
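To make the rule concrete, here is a minimal standalone sketch of the suffix logic as patched. mangleMemTypeSuffix is a hypothetical helper invented for this example; the real logic lives inline in AArch64Arm64ECCallLowering::canonicalizeThunkType.

#include <cstdint>
#include <string>

// Hypothetical helper mirroring the patched condition: the "a<N>" alignment
// suffix is emitted only for alignments of 16 or more, and never for return
// values (sret memory is allocated by the caller). Note that with this patch
// the pass always passes Align() (i.e. 1) for arguments, so no suffix is
// emitted in practice until clang encodes real alignments into the IR.
std::string mangleMemTypeSuffix(uint64_t SizeBytes, uint64_t Alignment,
                                bool IsRet) {
  std::string Out = "m";
  if (SizeBytes != 4)
    Out += std::to_string(SizeBytes);
  if (Alignment >= 16 && !IsRet)
    Out += "a" + std::to_string(Alignment);
  return Out;
}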
@llvm/pr-subscribers-backend-aarch64
Author: Eli Friedman (efriedma-quic)
Full diff: https://github.com/llvm/llvm-project/pull/90115.diff
3 Files Affected:
diff --git a/llvm/lib/Target/AArch64/AArch64Arm64ECCallLowering.cpp b/llvm/lib/Target/AArch64/AArch64Arm64ECCallLowering.cpp
index 3bf6283b79e9b8..dddc181b031444 100644
--- a/llvm/lib/Target/AArch64/AArch64Arm64ECCallLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64Arm64ECCallLowering.cpp
@@ -178,13 +178,14 @@ void AArch64Arm64ECCallLowering::getThunkArgTypes(
}
for (unsigned E = FT->getNumParams(); I != E; ++I) {
- Align ParamAlign = AttrList.getParamAlignment(I).valueOrOne();
#if 0
// FIXME: Need more information about argument size; see
// https://reviews.llvm.org/D132926
uint64_t ArgSizeBytes = AttrList.getParamArm64ECArgSizeBytes(I);
+ Align ParamAlign = AttrList.getParamAlignment(I).valueOrOne();
#else
uint64_t ArgSizeBytes = 0;
+ Align ParamAlign = Align();
#endif
Type *Arm64Ty, *X64Ty;
canonicalizeThunkType(FT->getParamType(I), ParamAlign,
@@ -294,7 +295,7 @@ void AArch64Arm64ECCallLowering::canonicalizeThunkType(
uint64_t TotalSizeBytes = ElementCnt * ElementSizePerBytes;
if (ElementTy->isFloatTy() || ElementTy->isDoubleTy()) {
Out << (ElementTy->isFloatTy() ? "F" : "D") << TotalSizeBytes;
- if (Alignment.value() >= 8 && !T->isPointerTy())
+ if (Alignment.value() >= 16 && !Ret)
Out << "a" << Alignment.value();
Arm64Ty = T;
if (TotalSizeBytes <= 8) {
@@ -325,7 +326,7 @@ void AArch64Arm64ECCallLowering::canonicalizeThunkType(
Out << "m";
if (TypeSize != 4)
Out << TypeSize;
- if (Alignment.value() >= 8 && !T->isPointerTy())
+ if (Alignment.value() >= 16 && !Ret)
Out << "a" << Alignment.value();
// FIXME: Try to canonicalize Arm64Ty more thoroughly?
Arm64Ty = T;
diff --git a/llvm/test/CodeGen/AArch64/arm64ec-entry-thunks.ll b/llvm/test/CodeGen/AArch64/arm64ec-entry-thunks.ll
index bb9ba05f7a2724..c00c9bfe127e8a 100644
--- a/llvm/test/CodeGen/AArch64/arm64ec-entry-thunks.ll
+++ b/llvm/test/CodeGen/AArch64/arm64ec-entry-thunks.ll
@@ -223,8 +223,8 @@ define i8 @matches_has_sret() nounwind {
%TSRet = type { i64, i64 }
define void @has_aligned_sret(ptr align 32 sret(%TSRet)) nounwind {
-; CHECK-LABEL: .def $ientry_thunk$cdecl$m16a32$v;
-; CHECK: .section .wowthk$aa,"xr",discard,$ientry_thunk$cdecl$m16a32$v
+; CHECK-LABEL: .def $ientry_thunk$cdecl$m16$v;
+; CHECK: .section .wowthk$aa,"xr",discard,$ientry_thunk$cdecl$m16$v
; CHECK: // %bb.0:
; CHECK-NEXT: stp q6, q7, [sp, #-176]! // 32-byte Folded Spill
; CHECK-NEXT: .seh_save_any_reg_px q6, 176
@@ -457,7 +457,7 @@ define %T2 @simple_struct(%T1 %0, %T2 %1, %T3, %T4) nounwind {
; CHECK-NEXT: .symidx $ientry_thunk$cdecl$i8$v
; CHECK-NEXT: .word 1
; CHECK-NEXT: .symidx "#has_aligned_sret"
-; CHECK-NEXT: .symidx $ientry_thunk$cdecl$m16a32$v
+; CHECK-NEXT: .symidx $ientry_thunk$cdecl$m16$v
; CHECK-NEXT: .word 1
; CHECK-NEXT: .symidx "#small_array"
; CHECK-NEXT: .symidx $ientry_thunk$cdecl$m2$m2F8
diff --git a/llvm/test/CodeGen/AArch64/arm64ec-exit-thunks.ll b/llvm/test/CodeGen/AArch64/arm64ec-exit-thunks.ll
index 3b911e78aff2a4..7a40fcd85ac585 100644
--- a/llvm/test/CodeGen/AArch64/arm64ec-exit-thunks.ll
+++ b/llvm/test/CodeGen/AArch64/arm64ec-exit-thunks.ll
@@ -236,8 +236,8 @@ declare void @has_sret(ptr sret([100 x i8])) nounwind;
%TSRet = type { i64, i64 }
declare void @has_aligned_sret(ptr align 32 sret(%TSRet)) nounwind;
-; CHECK-LABEL: .def $iexit_thunk$cdecl$m16a32$v;
-; CHECK: .section .wowthk$aa,"xr",discard,$iexit_thunk$cdecl$m16a32$v
+; CHECK-LABEL: .def $iexit_thunk$cdecl$m16$v;
+; CHECK: .section .wowthk$aa,"xr",discard,$iexit_thunk$cdecl$m16$v
; CHECK: // %bb.0:
; CHECK-NEXT: sub sp, sp, #48
; CHECK-NEXT: .seh_stackalloc 48
@@ -271,8 +271,8 @@ declare void @has_aligned_sret(ptr align 32 sret(%TSRet)) nounwind;
; CHECK: adrp x11, has_aligned_sret
; CHECK: add x11, x11, :lo12:has_aligned_sret
; CHECK: ldr x9, [x9, :lo12:__os_arm64x_check_icall]
-; CHECK: adrp x10, ($iexit_thunk$cdecl$m16a32$v)
-; CHECK: add x10, x10, :lo12:($iexit_thunk$cdecl$m16a32$v)
+; CHECK: adrp x10, ($iexit_thunk$cdecl$m16$v)
+; CHECK: add x10, x10, :lo12:($iexit_thunk$cdecl$m16$v)
; CHECK: blr x9
; CHECK: .seh_startepilogue
; CHECK: ldr x30, [sp], #16 // 8-byte Folded Reload
@@ -492,7 +492,7 @@ declare %T2 @simple_struct(%T1, %T2, %T3, %T4) nounwind;
; CHECK-NEXT: .symidx has_sret
; CHECK-NEXT: .word 0
; CHECK-NEXT: .symidx has_aligned_sret
-; CHECK-NEXT: .symidx $iexit_thunk$cdecl$m16a32$v
+; CHECK-NEXT: .symidx $iexit_thunk$cdecl$m16$v
; CHECK-NEXT: .word 4
; CHECK-NEXT: .symidx "#has_aligned_sret$exit_thunk"
; CHECK-NEXT: .symidx has_aligned_sret
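As a usage check against the updated test expectations above (assuming the hypothetical mangleMemTypeSuffix helper from the earlier sketch):

#include <cassert>

int main() {
  // has_aligned_sret takes "ptr align 32 sret(%TSRet)": 16 bytes, align 32,
  // but it's a return slot, so the alignment is no longer mangled.
  assert(mangleMemTypeSuffix(16, 32, /*IsRet=*/true) == "m16"); // was "m16a32"
  // A 16-byte, 32-byte-aligned by-value argument would still get the suffix
  // once clang actually emits the alignment.
  assert(mangleMemTypeSuffix(16, 32, /*IsRet=*/false) == "m16a32");
  return 0;
}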